# Vit Base Patch16 224 Int8 Static Inc
Intel · Apache-2.0 · High compression ratio
An INT8 PyTorch model statically quantized post-training with Intel® Neural Compressor, starting from a fine-tuned Google ViT checkpoint; quantization significantly reduces model size while maintaining high accuracy.
Image Classification · Transformers · Downloads: 82 · Likes: 1
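
The card above highlights post-training static quantization with Intel® Neural Compressor. Below is a minimal, illustrative sketch of how such an INT8 model could be produced from an FP32 ViT checkpoint; the random calibration data and output path are placeholders, not details taken from this model card.

```python
import torch
from torch.utils.data import DataLoader, Dataset
from transformers import ViTForImageClassification
from neural_compressor import PostTrainingQuantConfig, quantization

# FP32 baseline: the Google ViT checkpoint the card says was fine-tuned.
fp32_model = ViTForImageClassification.from_pretrained("google/vit-base-patch16-224")

class RandomImages(Dataset):
    """Stand-in calibration set: a handful of random 224x224 RGB tensors."""
    def __len__(self):
        return 8
    def __getitem__(self, idx):
        return torch.rand(3, 224, 224), 0  # (pixel_values, dummy label)

# Static quantization calibrates activation ranges on a small dataset.
calib_loader = DataLoader(RandomImages(), batch_size=4)

config = PostTrainingQuantConfig(approach="static")  # post-training static INT8
int8_model = quantization.fit(model=fp32_model, conf=config,
                              calib_dataloader=calib_loader)
int8_model.save("./vit-int8-static")  # placeholder output directory
```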

# Minilmv2 L6 H384 RoBERTa Large
torbenal
MiniLM v2 is a lightweight language model distilled by Microsoft from RoBERTa-Large; it is highly efficient and compact.
Large Language Model · Transformers · Downloads: 15 · Likes: 0
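
As a rough illustration of how a distilled encoder like this is typically used, the sketch below loads it with the Transformers library and extracts token embeddings. The hub id is inferred from the listing above and may not match the actual repository path.

```python
from transformers import AutoModel, AutoTokenizer

# Repo id assumed from the card above; adjust if the actual hub path differs.
model_id = "torbenal/MiniLMv2-L6-H384-RoBERTa-Large"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

inputs = tokenizer("MiniLM v2 is a compact distilled encoder.", return_tensors="pt")
outputs = model(**inputs)

# The L6/H384 naming suggests 6 layers and a 384-dimensional hidden size,
# so this should print (1, sequence_length, 384).
print(outputs.last_hidden_state.shape)
```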

# Roberta2roberta L 24 Bbc
google · Apache-2.0
An encoder-decoder model based on the RoBERTa architecture, designed for extreme summarization and fine-tuned on the BBC XSum dataset.
Text Generation · Transformers · English · Downloads: 959 · Likes: 3
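
As an example of how an encoder-decoder summarizer of this kind is typically driven through the Transformers generate API, here is a short sketch; the hub id google/roberta2roberta_L-24_bbc is assumed from the listing, and the input article is placeholder text.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Hub id assumed from the card above.
model_id = "google/roberta2roberta_L-24_bbc"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

article = (
    "The local council has approved plans for a new railway station, "
    "which officials say will cut commuting times across the region."
)

input_ids = tokenizer(article, return_tensors="pt").input_ids
# XSum-style extreme summarization: aim for a single-sentence summary.
summary_ids = model.generate(input_ids, max_length=32, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```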

# Featured Recommended AI Models